
    Are regulations safe? Reflections from developing a digital cancer decision support tool

    PURPOSE Informatics solutions for early diagnosis of cancer in primary care are increasingly prevalent, but it is not clear whether existing and planned standards and regulations sufficiently address patient safety, or whether these standards are fit for purpose. We use a patient safety perspective to reflect on the development of a computerized cancer risk assessment tool embedded within a UK primary care electronic health record system. METHODS We developed a computerized version of the CAncer Prevention in ExetER studies risk assessment tool, in compliance with the European Union's Medical Device Regulations. The process of building this tool afforded an opportunity to reflect on clinical concerns and on whether current regulations for medical devices are fit for purpose. We identified concerns for patient safety and developed nine practical recommendations to mitigate them. RESULTS We noted that medical device regulations (1) were originally created for hardware devices rather than software, (2) offer one-shot approval rather than supporting iterative innovation and learning, (3) are biased toward loss-transfer approaches that attempt to manage the fallout of harm rather than preventing hazards from becoming harmful, and (4) are biased toward known hazards, even though unknown hazards are an expected consequence of health care as a complex adaptive system. Our nine recommendations focus on embedding less reductionist, stronger system perspectives into regulations and standards. CONCLUSION Our intention is to share our experience to support research-led collaborative development of health informatics solutions in cancer. We argue that regulations in the European Union do not sufficiently address the complexity of healthcare information systems, with consequences for patient safety. Future standards and regulations should follow a system-based approach to risk, safety, and accident avoidance.

    Process Mining of Disease Trajectories: A Literature Review

    Disease trajectories model patterns of disease over time and can be mined by extracting diagnosis codes from electronic health records (EHRs). Process mining provides a mature set of methods and tools that has been used to mine care pathways from EHR event data and could be applied to disease trajectories. This paper presents a literature review of process mining as it relates to mining disease trajectories from EHRs. Our review identified 156 papers of potential interest, but only four that directly applied process mining to disease trajectory modelling. These four papers are presented in detail, covering data source, size, selection criteria, choice of process mining algorithms, trajectory definition strategies, model visualisations, and methods of evaluation. The review lays the foundations for further research that leverages the established benefits of process mining for the emerging task of mining disease trajectories.
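    To make the idea concrete, the sketch below builds a directly-follows graph, one of the basic structures behind process-mining discovery algorithms, from per-patient sequences of diagnosis codes. The records, diagnosis codes, and field layout are illustrative assumptions, not data or code from the four reviewed studies.

        from collections import Counter, defaultdict
        from datetime import date

        # Illustrative event records: (patient_id, diagnosis_code, date) -- hypothetical data.
        events = [
            ("p1", "E11", date(2015, 3, 1)),   # type 2 diabetes
            ("p1", "I10", date(2016, 7, 9)),   # essential hypertension
            ("p1", "N18", date(2019, 1, 4)),   # chronic kidney disease
            ("p2", "I10", date(2014, 5, 2)),
            ("p2", "E11", date(2017, 8, 21)),
        ]

        # Group events into one chronologically ordered trajectory per patient.
        trajectories = defaultdict(list)
        for patient, code, when in sorted(events, key=lambda e: (e[0], e[2])):
            trajectories[patient].append(code)

        # Count directly-follows relations (code A immediately followed by code B).
        directly_follows = Counter()
        for codes in trajectories.values():
            for a, b in zip(codes, codes[1:]):
                directly_follows[(a, b)] += 1

        for (a, b), n in directly_follows.most_common():
            print(f"{a} -> {b}: {n} patient(s)")

    Dedicated process-mining toolkits replace this simple counting step with discovery algorithms that also handle noise, loops, and concurrency, but the underlying event-log shape (case, activity, timestamp) is the same.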

    Effect of a hospital command centre on patient safety: An interrupted time series study

    Background Command centres have been piloted in some hospitals across the developed world in the last few years, but their impact on patient safety has not been systematically studied. We aimed to investigate this. Methods This is a retrospective, population-based cohort study. Participants were patients who visited Bradford Royal Infirmary Hospital and Calderdale & Huddersfield hospitals between 1 January 2018 and 31 August 2021. A five-phase interrupted time series linear regression analysis was used. Results After introduction of a command centre, mortality and readmissions marginally improved, while there was no statistically significant impact on postoperative sepsis. In the intervention hospital, compared with the pre-intervention period, mortality decreased by 1.4% (95% CI 0.8% to 1.9%), 1.5% (95% CI 0.9% to 2.1%), 1.3% (95% CI 0.7% to 1.8%) and 2.5% (95% CI 1.7% to 3.4%) during successive phases of the command centre programme, including roll-in and activation of the technology and preparatory quality improvement work. However, in the control site, compared with baseline, weekly mortality also decreased by 2.0% (95% CI 0.9 to 3.1), 2.3% (95% CI 1.1 to 3.5), 1.3% (95% CI 0.2 to 2.4) and 3.1% (95% CI 1.4 to 4.8) for the respective intervention phases. No impact on any of the indicators was observed when only the software component of the command centre was considered. Conclusion Implementation of a hospital command centre may have a marginal positive impact on patient safety when implemented as part of a broader hospital-wide improvement programme that includes co-location of operations and clinical leads in a central location. However, improvement in patient safety indicators was also observed over a comparable period at the control site. Further evaluative research into the impact of hospital command centres on a broader range of patient safety and other outcomes is warranted.
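    For readers unfamiliar with the design, a segmented (interrupted time series) regression can be sketched roughly as below. The weekly data, phase boundaries, variable names, and use of statsmodels are illustrative assumptions, not the study's actual analysis code.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)

        # Hypothetical weekly mortality rates with a running time index and a phase
        # label marking the successive stages of an intervention programme.
        n_weeks = 190
        df = pd.DataFrame({
            "week": np.arange(n_weeks),
            "phase": pd.cut(np.arange(n_weeks), bins=[-1, 60, 100, 140, 189],
                            labels=["baseline", "preparation", "roll_in", "activation"]),
            "mortality": rng.normal(3.0, 0.3, n_weeks),
        })

        # Segmented regression: a shared linear trend over time plus a level shift
        # for each intervention phase relative to baseline.
        model = smf.ols("mortality ~ week + C(phase)", data=df).fit()
        print(model.summary())

    Phase-specific slopes can be added with interaction terms (e.g. week:C(phase)) when the question is whether the trend, and not just the level, changes after each phase.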

    Postoperative mortality and complications in patients with and without pre‐operative SARS‐CoV‐2 infection: a service evaluation of 24 million linked records using OpenSAFELY

    Surgical decision-making after SARS-CoV-2 infection is influenced by the presence of comorbidity, infection severity and whether the surgical problem is time-sensitive. Contemporary surgical policy to delay surgery is informed by highly heterogeneous country-specific guidance. We evaluated surgical provision in England during the COVID-19 pandemic to assess real-world practice and whether deferral remains necessary. Using the OpenSAFELY platform, we adapted the COVIDSurg protocol for a service evaluation of surgical procedures that took place within the English NHS from 17 March 2018 to 17 March 2022. We assessed whether hospitals adhered to guidance not to operate on patients within 7 weeks of an indication of SARS-CoV-2 infection. Additional outcomes were postoperative all-cause mortality (30 days, 6 months) and complications (pulmonary, cardiac, cerebrovascular). The exposure was the interval between the most recent indication of SARS-CoV-2 infection and subsequent surgery. In any 6-month window, < 3% of surgical procedures were conducted within 7 weeks of an indication of SARS-CoV-2 infection. Mortality for surgery conducted within 2 weeks of a positive test in the era since widespread SARS-CoV-2 vaccine availability was 1.1%, declining to 0.3% by 4 weeks. Compared with the COVIDSurg study cohort, outcomes for patients in the English NHS cohort were better during the COVIDSurg data collection period and in the pandemic era before vaccines became available. Clinicians within the English NHS followed national guidance by operating on very few patients within 7 weeks of a positive indication of SARS-CoV-2 infection. In England, surgical patients' overall risk following an indication of SARS-CoV-2 infection is lower than previously thought.
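    As an illustration of how such an exposure might be derived, the pandas sketch below computes the interval in whole weeks between a patient's most recent indication of SARS-CoV-2 infection and their surgery, and flags operations inside the 7-week window. The column names and example dates are assumptions made for the example, not OpenSAFELY code.

        import pandas as pd

        # Hypothetical patient-level records: surgery date and date of the most recent
        # indication of SARS-CoV-2 infection before that surgery (if any).
        df = pd.DataFrame({
            "surgery_date": pd.to_datetime(["2021-06-01", "2021-07-15", "2022-01-10"]),
            "last_sars_cov_2_indication": pd.to_datetime(["2021-05-20", None, "2021-10-01"]),
        })

        # Exposure: whole weeks between the most recent infection indication and surgery.
        df["weeks_since_infection"] = (
            df["surgery_date"] - df["last_sars_cov_2_indication"]
        ).dt.days // 7

        # Flag operations performed within 7 weeks of an indication of infection,
        # the window that guidance advised avoiding. Patients with no prior
        # indication (NaT) are left unflagged.
        df["within_7_weeks"] = df["weeks_since_infection"] < 7
        print(df)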

    A stable genetic polymorphism underpinning microbial syntrophy

    Syntrophies are metabolic cooperations whereby two organisms co-metabolize a substrate in an interdependent manner. Many of the observed natural syntrophic interactions are mandatory in the absence of strong electron acceptors, such that one species in the syntrophy has to assume the role of electron sink for the other. While this presents an ecological setting in which syntrophy is beneficial, the potential genetic drivers of syntrophy remain unknown to date. Here, we show that the syntrophic sulfate-reducing species Desulfovibrio vulgaris displays a stable genetic polymorphism, in which only a specific genotype is able to engage in syntrophy with the hydrogenotrophic methanogen Methanococcus maripaludis. This 'syntrophic' genotype is characterized by two genetic alterations, one of which is an in-frame deletion in the gene encoding the ion-translocating subunit CooK of the membrane-bound Coo hydrogenase. We show that this genotype presents a specific physiology, in which reshaping of energy conservation in the lactate oxidation pathway enables it to produce sufficient intermediate hydrogen for sustained M. maripaludis growth and, thus, syntrophy. To our knowledge, these findings provide for the first time a genetic basis for syntrophy in nature and bring us closer to the rational engineering of syntrophy in synthetic microbial communities.

    N-gram analysis of 970 microbial organisms reveals presence of biological language models

    Background It has been suggested previously that genome and proteome sequences show characteristics typical of natural-language texts, such as "signature-style" word usage indicative of authors or topics, and that algorithms originally developed for natural language processing may therefore be applied to genome sequences to draw biologically relevant conclusions. Following this approach of 'biological language modeling', statistical n-gram analysis has been applied to comparative analysis of the whole proteome sequences of 44 organisms. It was shown that a few particular amino acid n-grams are found in abundance in one organism but occur very rarely in others, thereby serving as genome signatures. At that time the proteomes of only 44 organisms were available, limiting the generalization of this hypothesis. Today nearly 1,000 genome sequences and corresponding translated sequences are available, making it feasible to test the existence of biological language models across the evolutionary tree. Results We studied the whole proteome sequences of 970 microbial organisms using n-gram frequencies and cross-perplexity, employing the Biological Language Modeling Toolkit and the Patternix Revelio toolkit. Genus-specific signatures were observed even in a simple unigram distribution. By taking the statistical n-gram model of one organism as reference and computing the cross-perplexity of all other microbial proteomes against it, cross-perplexity was found to be predictive of branch distance in the phylogenetic tree. For example, a 4-gram model built from the proteome of Shigella flexneri 2a, which belongs to the class Gammaproteobacteria, showed a self-perplexity of 15.34, while the cross-perplexity of other organisms ranged from 15.59 to 29.5 and was proportional to their branching distance from S. flexneri in the evolutionary tree. The organisms of this genus, which happen to be pathotypes of E. coli, also have the closest perplexity values to E. coli. Conclusion Whole proteome sequences of microbial organisms have been shown to contain particular n-gram sequences in abundance in one organism but very rarely in others, thereby serving as proteome signatures. It has further been shown that perplexity, a statistical measure of similarity of n-gram composition, can be used to predict evolutionary distance within a genus in the phylogenetic tree.
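    A minimal sketch of the core computation, amino-acid n-gram counts and cross-perplexity, is given below. It uses toy sequence fragments and add-one smoothing, and treats n-grams as drawn independently from the reference frequency distribution, a simplification of the conditional language-model perplexity computed by the toolkits named above.

        import math
        from collections import Counter

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def ngram_model(sequence: str, n: int = 2) -> Counter:
            """Count amino-acid n-grams in a protein sequence."""
            return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))

        def cross_perplexity(model: Counter, sequence: str, n: int = 2) -> float:
            """Perplexity of `sequence` under `model`, with add-one smoothing."""
            total = sum(model.values())
            vocab = len(AMINO_ACIDS) ** n
            log_prob, count = 0.0, 0
            for i in range(len(sequence) - n + 1):
                gram = sequence[i:i + n]
                p = (model[gram] + 1) / (total + vocab)
                log_prob += math.log(p)
                count += 1
            return math.exp(-log_prob / count)

        # Toy fragments (hypothetical); the study used whole proteome sequences.
        reference = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
        query = "MKKTAIAIAVALAGFATVAQAAPKDNTWY"

        model = ngram_model(reference, n=2)
        print("self-perplexity:", round(cross_perplexity(model, reference), 2))
        print("cross-perplexity:", round(cross_perplexity(model, query), 2))

    Lower cross-perplexity means the query proteome's n-gram composition is closer to the reference model, which is the sense in which the paper relates perplexity to phylogenetic branch distance.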

    Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set

    We report a measurement of the bottom-strange meson mixing phase β_s using the time evolution of B⁰_s → J/ψ(→μ⁺μ⁻)φ(→K⁺K⁻) decays in which the quark-flavor content of the bottom-strange meson is identified at production. This measurement uses the full data set of proton-antiproton collisions at √s = 1.96 TeV collected by the Collider Detector experiment at the Fermilab Tevatron, corresponding to 9.6 fb⁻¹ of integrated luminosity. We report confidence regions in the two-dimensional space of β_s and the B⁰_s decay-width difference ΔΓ_s, and measure β_s ∈ [−π/2, −1.51] ∪ [−0.06, 0.30] ∪ [1.26, π/2] at the 68% confidence level, in agreement with the standard model expectation. Assuming the standard model value of β_s, we also determine ΔΓ_s = 0.068 ± 0.026 (stat) ± 0.009 (syst) ps⁻¹ and the mean B⁰_s lifetime τ_s = 1.528 ± 0.019 (stat) ± 0.009 (syst) ps, which are consistent and competitive with determinations by other experiments. Published in Phys. Rev. Lett. 109, 171802 (2012).

    Integrated Assessment of Genomic Correlates of Protein Evolutionary Rate

    Rates of evolution differ widely among proteins, but the causes and consequences of such differences remain under debate. With the advent of high-throughput functional genomics, it is now possible to rigorously assess the genomic correlates of protein evolutionary rate. However, dissecting the correlations among evolutionary rate and these genomic features remains a major challenge. Here, we use an integrated probabilistic modeling approach to study genomic correlates of protein evolutionary rate in Saccharomyces cerevisiae. We measure and rank degrees of association between (i) an approximate measure of protein evolutionary rate with high genome coverage, and (ii) a diverse list of protein properties (sequence, structural, functional, network, and phenotypic). Among many statistically significant correlations, we observe that slowly evolving proteins tend to be regulated by more transcription factors, deficient in predicted structural disorder, involved in characteristic biological functions (such as translation), and biased in amino acid composition, and that they are generally more abundant, more essential, and enriched for interaction partners. Many of these results are in agreement with recent studies. In addition, we assess the information contribution of different subsets of these protein properties to the task of predicting slowly evolving proteins. We employ a logistic regression model on binned data that is able to account for intercorrelation, non-linearity, and heterogeneity within features. Our model considers features both individually and in natural ensembles (“meta-features”) in order to assess joint information contribution and the degree to which contributions are independent. Meta-features based on protein abundance and amino acid composition make strong, partially independent contributions to the task of predicting slowly evolving proteins; other meta-features make additional minor contributions. The combination of all meta-features yields predictions comparable to those based on paired species comparisons and approaching the predictive limit of optimal lineage-insensitive features. Our integrated assessment framework can be readily extended to other correlational analyses at the genome scale.
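    The sketch below illustrates the general shape of that prediction task, a logistic regression on binned protein features, using synthetic data and scikit-learn rather than the authors' probabilistic framework; the feature choices and labelling rule are assumptions made only for the example.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import KBinsDiscretizer

        rng = np.random.default_rng(1)

        # Synthetic protein features (hypothetical): abundance, number of regulating
        # transcription factors, and predicted fraction of structural disorder.
        n_proteins = 500
        abundance = rng.lognormal(mean=5.0, sigma=1.0, size=n_proteins)
        n_tfs = rng.poisson(lam=4.0, size=n_proteins)
        disorder = rng.uniform(0.0, 1.0, size=n_proteins)
        X = np.column_stack([abundance, n_tfs, disorder])

        # Label "slowly evolving" proteins with a rule that mimics the direction of
        # the reported correlations (more abundant, more regulated, less disordered).
        score = 0.8 * np.log(abundance) + 0.3 * n_tfs - 2.0 * disorder
        y = (score > np.median(score)).astype(int)

        # Bin each feature and fit a logistic regression on the binned encoding,
        # which tolerates non-linear and heterogeneous effects within a feature.
        clf = make_pipeline(
            KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="uniform"),
            LogisticRegression(max_iter=1000),
        )
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

    Binning followed by a one-hot encoding lets the model assign a separate weight to each interval of a feature, which is one simple way to capture non-linear relationships without leaving the logistic regression framework.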

    Estrogen receptor transcription and transactivation: Basic aspects of estrogen action

    Estrogen signaling has turned out to be much more complex and exciting than previously thought. The paradigm shift in our understanding of estrogen action came in 1996, when the presence of a new estrogen receptor (ER), ERβ, was reported. An intricate interplay between the classical ERα and the novel ERβ is of paramount importance for the final biological effect of estrogen in different target cells.